Adaptive Measurement Network for CS Image Reconstruction
Conventional compressive sensing (CS) reconstruction is very slow because of
its characteristic of solving an optimization problem. A convolutional neural
network can realize fast processing while achieving comparable results.
However, high-quality CS image recovery depends not only on good
reconstruction algorithms but also on good measurements. In this paper, we
propose an adaptive measurement network in which the measurement is obtained
by learning. The new network consists of a fully-connected layer and ReconNet.
The fully-connected layer, which has a low-dimensional output, acts as the
measurement. We train the fully-connected layer and ReconNet simultaneously
and obtain an adaptive measurement. Because the adaptive measurement fits the
dataset better than a random Gaussian measurement matrix, at the same
measurement rate it can extract the information of the scene more efficiently
and yields better reconstruction results. Experiments show that the new
network outperforms the original one.
Comment: 11 pages, 8 figures
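A minimal sketch of the idea described above, assuming 33x33 image blocks and
a 0.25 measurement rate (both assumptions), with a small stand-in network in
place of ReconNet, whose exact architecture is not given here:

    # Learned linear "measurement" layer + reconstruction network, trained
    # jointly so the measurement adapts to the training data.
    import torch
    import torch.nn as nn

    BLOCK = 33 * 33          # vectorized 33x33 image blocks (assumption)
    M = int(0.25 * BLOCK)    # measurement rate 0.25 (assumption)

    class AdaptiveMeasurementNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.measure = nn.Linear(BLOCK, M, bias=False)  # learned measurement matrix
            # Stand-in for ReconNet: any block-wise reconstruction network fits here.
            self.reconstruct = nn.Sequential(
                nn.Linear(M, BLOCK), nn.ReLU(),
                nn.Linear(BLOCK, BLOCK),
            )

        def forward(self, x_block):            # x_block: (batch, BLOCK)
            y = self.measure(x_block)          # adaptive low-dimensional measurement
            return self.reconstruct(y)

    model = AdaptiveMeasurementNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(8, BLOCK)                    # toy batch of vectorized blocks
    loss = nn.functional.mse_loss(model(x), x)  # measurement + reconstruction jointly
    loss.backward(); opt.step()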
lp-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers
We assume data sampled from a mixture of d-dimensional linear subspaces with
spherically symmetric distributions within each subspace and an additional
outlier component with spherically symmetric distribution within the ambient
space (for simplicity we may assume that all distributions are uniform on their
corresponding unit spheres). We also assume mixture weights for the different
components. We say that one of the underlying subspaces of the model is most
significant if its mixture weight is higher than the sum of the mixture weights
of all other subspaces. We study the recovery of the most significant subspace
by minimizing the lp-averaged distances of data points from d-dimensional
subspaces, where p>0. Unlike other lp minimization problems, this minimization
is non-convex for all p>0 and thus requires different methods for its analysis.
We show that if 0<p<=1, then for any fraction of outliers the most significant
subspace can be recovered by lp minimization with overwhelming probability
(which depends on the generating distribution and its parameters). We show that
when adding small noise around the underlying subspaces the most significant
subspace can be nearly recovered by lp minimization for any 0<p<=1 with an
error proportional to the noise level. On the other hand, if p>1 and there is
more than one underlying subspace, then with overwhelming probability the most
significant subspace cannot be recovered or nearly recovered. This last result
does not require spherically symmetric outliers.
Comment: This is a revised version of the part of 1002.1994 that deals with
single subspace recovery. V3: Improved estimates (in particular for Lemma 3.1
and for estimates relying on it), asymptotic dependence of probabilities and
constants on D and d, and further clarifications; for simplicity it assumes
uniform distributions on spheres. V4: minor revision for the published
version
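Concretely, the objective studied above can be written as follows (the
notation is illustrative: $x_1,\dots,x_N$ are the data points, $G(D,d)$ the
set of $d$-dimensional linear subspaces of the ambient space, and
$\mathrm{dist}(x_i,L)$ the Euclidean distance from $x_i$ to $L$):

\[
\min_{L \in G(D,d)} \; \sum_{i=1}^{N} \mathrm{dist}(x_i, L)^{p}, \qquad p > 0,
\]

whose minimizers coincide with those of the $\ell_p$-averaged distance
$\bigl(\tfrac{1}{N}\sum_{i} \mathrm{dist}(x_i, L)^{p}\bigr)^{1/p}$.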
Necessary and sufficient conditions of solution uniqueness in l1 minimization
This paper shows that the solutions to various convex $\ell_1$ minimization
problems are \emph{unique} if and only if a common set of conditions are
satisfied. This result applies broadly to the basis pursuit model, basis
pursuit denoising model, Lasso model, as well as other $\ell_1$ models that
either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\le\sigma$, where
$f$ is a strictly convex function. For these models, this paper proves that,
given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$, $x^*$
is the unique solution if and only if $A_I$ has full column rank and there
exists $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for $i \notin I$. This
condition is previously known to be sufficient for the basis pursuit model to
have a unique solution supported on $I$. Indeed, it is also necessary, and
applies to a variety of other $\ell_1$ models. The paper also discusses ways
to recognize unique solutions and verify the uniqueness conditions numerically.
Comment: 6 pages; revised version; submitted
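A small numerical check of the stated condition for a candidate solution, as a
sketch: it tests the least-norm certificate $y$ solving $A_I^T y = s$ (an
assumption on my part; if this particular $y$ satisfies the strict bound the
condition holds, while its failure does not by itself disprove uniqueness).

    # Numerical check of the uniqueness condition for a candidate solution x*.
    import numpy as np

    def check_uniqueness(A, x_star, tol=1e-10):
        I = np.flatnonzero(np.abs(x_star) > tol)       # I = supp(x*)
        s = np.sign(x_star[I])                         # s = sign(x*_I)
        A_I = A[:, I]
        if np.linalg.matrix_rank(A_I) < len(I):        # A_I must have full column rank
            return False
        y, *_ = np.linalg.lstsq(A_I.T, s, rcond=None)  # least-norm y with A_I^T y = s
        if np.linalg.norm(A_I.T @ y - s) > 1e-8:       # no exact certificate found
            return False
        off = np.setdiff1d(np.arange(A.shape[1]), I)
        return bool(np.max(np.abs(A[:, off].T @ y)) < 1) if off.size else True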
Super-resolution far-field ghost imaging via compressive sampling
More image details can be resolved by improving the system's imaging
resolution, and enhancing the resolution beyond the system's Rayleigh
diffraction limit is generally called super-resolution. By combining the
sparse prior property of images with the ghost imaging method, we demonstrated
experimentally that super-resolution imaging can be nonlocally achieved in the
far field even without looking at the object. A physical explanation of
super-resolution ghost imaging via compressive sampling and its potential
applications are also discussed.
Comment: 4 pages, 4 figures
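A toy sketch of the compressive-sampling reconstruction step used in such
ghost-imaging schemes: bucket values are correlations of random speckle
patterns with the object, and the image is recovered by l1-regularized least
squares. Plain ISTA with sparsity assumed directly in the pixel basis is my
stand-in for whichever basis and solver the authors used.

    # Toy ghost-imaging recovery: y holds bucket values, rows of A the speckle
    # patterns, and the sparse object is recovered by ISTA (l1 regularization).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 32                              # 8x8 object, fewer measurements than pixels
    obj = np.zeros(n)
    obj[rng.choice(n, 5, replace=False)] = 1.0 # sparse test object
    A = rng.standard_normal((m, n))            # speckle patterns, one per row
    y = A @ obj                                # bucket detector values

    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # ISTA step size (1 / spectral norm^2)
    lam = 0.05
    for _ in range(500):                       # iterative shrinkage-thresholding
        z = x - step * A.T @ (A @ x - y)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)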
Sparse Randomized Kaczmarz for Support Recovery of Jointly Sparse Corrupted Multiple Measurement Vectors
While single measurement vector (SMV) models have been widely studied in
signal processing, there is a surging interest in addressing the multiple
measurement vectors (MMV) problem. In the MMV setting, more than one
measurement vector is available and the multiple signals to be recovered share
some commonalities such as a common support. Applications in which MMV is a
naturally occurring phenomenon include online streaming, medical imaging, and
video recovery. This work presents a stochastic iterative algorithm for the
support recovery of jointly sparse corrupted MMV. We present a variant of the
Sparse Randomized Kaczmarz algorithm for corrupted MMV and compare our proposed
method with an existing Kaczmarz type algorithm for MMV problems. We also
showcase the usefulness of our approach in the online (streaming) setting and
provide empirical evidence suggesting that the proposed method is robust to
the distribution of the corruption and to the number of corruptions occurring.
Comment: 13 pages, 6 figures
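A sketch of a sparse randomized Kaczmarz iteration for a single measurement
vector: a standard Kaczmarz projection applied to a reweighted row that damps
directions outside the current support estimate. The sparsity level k, the
1/sqrt(t) damping, and the omission of the paper's corrupted-MMV handling are
all simplifications on my part.

    # Sparse randomized Kaczmarz sketch for Ax = b with a k-sparse solution.
    import numpy as np

    def sparse_rk(A, b, k, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        row_norms = np.sum(A ** 2, axis=1)
        probs = row_norms / row_norms.sum()       # rows sampled prop. to squared norm
        for t in range(1, iters + 1):
            i = rng.choice(m, p=probs)
            support = np.argsort(-np.abs(x))[:k]  # current support estimate
            w = np.full(n, 1.0 / np.sqrt(t))      # damp off-support directions over time
            w[support] = 1.0
            a_w = A[i] * w                        # reweighted row
            x += (b[i] - a_w @ x) / (a_w @ a_w + 1e-12) * a_w
        return x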
Robust Matrix Completion
This paper considers the problem of recovery of a low-rank matrix in the
situation when most of its entries are not observed and a fraction of observed
entries are corrupted. The observations are noisy realizations of the sum of a
low rank matrix, which we wish to recover, with a second matrix having a
complementary sparse structure such as element-wise or column-wise sparsity. We
analyze a class of estimators obtained by solving a constrained convex
optimization problem that combines the nuclear norm and a convex relaxation for
a sparse constraint. Our results are obtained for the simultaneous presence of
random and deterministic patterns in the sampling scheme. We provide guarantees
for recovery of low-rank and sparse components from partial and corrupted
observations in the presence of noise and show that the obtained rates of
convergence are minimax optimal.
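A common penalized form of such an estimator is shown below; the notation is
illustrative and the paper analyzes a constrained formulation. Here $\Omega$
is the set of observed entries, $\|L\|_*$ the nuclear norm, and $\|S\|_1$ may
be replaced by a column-wise norm such as $\|S\|_{2,1}$ for column-wise
corruptions:

\[
(\hat L, \hat S) \in \arg\min_{L,\,S} \; \frac{1}{|\Omega|}
\sum_{(i,j)\in\Omega} \bigl(Y_{ij} - L_{ij} - S_{ij}\bigr)^{2}
+ \lambda_{1}\|L\|_{*} + \lambda_{2}\|S\|_{1}.
\]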
Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features
One-class support vector machine (OC-SVM) has long been one of the most
effective anomaly detection methods and is extensively adopted in both
research and industrial applications. The biggest remaining issue for OC-SVM
is its limited capability to operate on large and high-dimensional datasets,
owing to optimization complexity. These problems might be mitigated via
dimensionality reduction techniques such as manifold learning or autoencoders.
However,
previous work often treats representation learning and anomaly prediction
separately. In this paper, we propose an autoencoder-based one-class support
vector machine (AE-1SVM) that brings OC-SVM into the deep learning context,
with the aid of random Fourier features to approximate the radial basis
kernel, by combining it with a representation learning architecture and
jointly exploiting stochastic gradient descent to obtain end-to-end training.
Interestingly, this
also opens up the possible use of gradient-based attribution methods to explain
the decision making for anomaly detection, which has so far been challenging
owing to the implicit mapping between the input space and the kernel space.
To the best of our knowledge, this is the first work to study the
interpretability of deep learning in anomaly detection. We evaluate our method
on a wide range of unsupervised anomaly detection tasks in which our end-to-end
training architecture achieves performance significantly better than previous
work that uses separate training.
Comment: Accepted at European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases (ECML-PKDD) 201
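A minimal sketch of the general idea, under my own illustrative choices of
layer sizes, kernel bandwidth gamma, and nu (this is not the authors' exact
architecture): an encoder feeds random Fourier features that approximate an
RBF kernel, followed by a linear one-class SVM objective, trained end-to-end
with SGD.

    # Encoder -> random Fourier features -> linear OC-SVM objective, end to end.
    import torch
    import torch.nn as nn

    d_in, d_z, d_rff, gamma, nu = 100, 8, 64, 1.0, 0.1

    encoder = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))
    W = torch.randn(d_z, d_rff) * (2 * gamma) ** 0.5   # fixed RFF projection
    b = 2 * torch.pi * torch.rand(d_rff)
    svm_w = nn.Parameter(torch.zeros(d_rff))
    rho = nn.Parameter(torch.zeros(()))

    opt = torch.optim.SGD(list(encoder.parameters()) + [svm_w, rho], lr=1e-2)
    x = torch.rand(128, d_in)                          # toy batch of "normal" data
    phi = (2.0 / d_rff) ** 0.5 * torch.cos(encoder(x) @ W + b)  # RFF of embedding
    scores = phi @ svm_w - rho
    # One-class SVM objective: 0.5*||w||^2 - rho + mean(hinge loss) / nu
    loss = 0.5 * svm_w.dot(svm_w) - rho + torch.clamp(-scores, min=0).mean() / nu
    loss.backward(); opt.step()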
Edge detection in microscopy images using curvelets
BACKGROUND: Despite significant progress in imaging technologies, the
efficient detection of edges and elongated features in images of
intracellular and multicellular structures acquired using light or electron
microscopy is a challenging and time consuming task in many laboratories.

RESULTS: We present a novel method, based on the discrete curvelet transform,
to extract a directional field from the image that indicates the location and
direction of the edges. This directional field is then processed using the
non-maximal suppression and thresholding steps of the Canny algorithm to
trace along the edges and mark them. Optionally, the edges may then be
extended along the directions given by the curvelets to provide a more
connected edge map. We compare our scheme to the Canny edge detector and an
edge detector based on Gabor filters, and show that our scheme performs
better in detecting larger, elongated structures possibly composed of several
step or ridge edges.

CONCLUSION: The proposed curvelet-based edge detection is a novel and
competitive approach for imaging problems. We expect that the methodology and
the accompanying software will facilitate and improve edge detection in
images acquired using light or electron microscopy.
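A sketch of the Canny-style post-processing described above: non-maximal
suppression followed by a simple threshold, applied to a directional field
given as magnitude and angle arrays. Producing that field from the discrete
curvelet transform is assumed and not shown here.

    # Non-maximal suppression and thresholding on a (magnitude, angle) field.
    import numpy as np

    def nms_and_threshold(mag, angle, thresh):
        """mag, angle: 2-D arrays; angle in radians along the edge-normal."""
        h, w = mag.shape
        out = np.zeros_like(mag)
        # Quantize each direction to one of the 4 neighbor axes: 0/45/90/135 deg.
        q = (np.round(angle / (np.pi / 4)) % 4).astype(int)
        offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                di, dj = offsets[q[i, j]]
                # Keep a pixel only if it is a local maximum along its direction.
                if (mag[i, j] >= mag[i + di, j + dj]
                        and mag[i, j] >= mag[i - di, j - dj]):
                    out[i, j] = mag[i, j]
        return out > thresh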
Accurate Optimization of Weighted Nuclear Norm for Non-Rigid Structure from Motion
Fitting a matrix of a given rank to data in a least squares sense can be done
very effectively using 2nd order methods such as Levenberg-Marquardt by
explicitly optimizing over a bilinear parameterization of the matrix. In
contrast, when applying more general singular value penalties, such as weighted
nuclear norm priors, direct optimization over the elements of the matrix is
typically used. Due to non-differentiability of the resulting objective
function, first order sub-gradient or splitting methods are predominantly used.
While these offer rapid iterations, it is well known that they become
inefficient near the minimum due to zig-zagging, and in practice one is
therefore often forced to settle for an approximate solution.
In this paper we show that more accurate results can in many cases be
achieved with 2nd order methods. Our main result shows how to construct
bilinear formulations, for a general class of regularizers including weighted
nuclear norm penalties, that are provably equivalent to the original problems.
With these formulations the regularizing function becomes twice differentiable
and 2nd order methods can be applied. We show experimentally, on a number of
structure from motion problems, that our approach outperforms state-of-the-art
methods.
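The key identity behind such bilinear formulations, stated here for the plain
(unweighted) nuclear norm, is the standard fact

\[
\|X\|_{*} \;=\; \min_{B,\,C:\; X = BC^{\top}}
\tfrac{1}{2}\bigl(\|B\|_{F}^{2} + \|C\|_{F}^{2}\bigr),
\]

where the minimum is over factorizations whose inner dimension is at least
$\mathrm{rank}(X)$; the non-differentiable penalty on $X$ becomes a smooth
function of the factors $B$ and $C$, so second-order methods apply. The paper
constructs provably equivalent bilinear formulations for a much broader class
of penalties, including weighted nuclear norms.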